Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. Many interesting problems can be formulated as convex optimization problems of the form

: \min_{x \in \mathbb{R}^N} \; f_1(x) + f_2(x) + \cdots + f_n(x)

where f_1, \dots, f_n are convex functions defined from \mathbb{R}^N to \mathbb{R}, some of which are non-differentiable. This rules out conventional smooth optimization techniques such as the steepest descent method and the conjugate gradient method. Proximal gradient methods are a class of algorithms that can solve such problems. These methods proceed by splitting, in that the functions f_1, \dots, f_n are used individually so as to yield an easily implementable algorithm. They are called proximal because each non-smooth function among f_1, \dots, f_n is involved via its proximity operator. The iterative shrinkage-thresholding algorithm, projected Landweber, projected gradient, alternating projections, the alternating-direction method of multipliers, and alternating split Bregman are special instances of proximal algorithms. Details of proximal methods are discussed in Combettes and Pesquet. For the theory of proximal gradient methods from the perspective of, and with applications to, statistical learning theory, see proximal gradient methods for learning.

== Notations and terminology ==

Let \mathbb{R}^N, the N-dimensional Euclidean space, be the domain of the function f : \mathbb{R}^N \to (-\infty, +\infty]. Suppose C is a non-empty convex subset of \mathbb{R}^N. Then, the indicator function of C is defined as

: \iota_C(x) = \begin{cases} 0 & \text{if } x \in C \\ +\infty & \text{if } x \notin C \end{cases}

The p-norm is defined as (for p \ge 1)

: \|x\|_p = \left( |x_1|^p + |x_2|^p + \cdots + |x_N|^p \right)^{1/p}

The distance from x \in \mathbb{R}^N to C is defined as

: D_C(x) = \min_{y \in C} \|x - y\|

If C is closed and convex, the projection of x \in \mathbb{R}^N onto C is the unique point P_C x \in C such that D_C(x) = \|x - P_C x\|_2.

The subdifferential of f at x is given by

: \partial f(x) = \left\{ u \in \mathbb{R}^N : \forall y \in \mathbb{R}^N, \; (y - x)^{\mathrm T} u + f(x) \le f(y) \right\}.
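As a concrete illustration of the splitting idea, the following is a minimal Python sketch (not part of the original article) of the proximal gradient iteration x^{k+1} = prox_{γ f_2}(x^k − γ ∇f_1(x^k)) applied to a lasso-type problem, which corresponds to the iterative shrinkage-thresholding algorithm mentioned above. The problem data `A`, `b`, the weight `lam`, and the step size are illustrative assumptions. The last helper shows that when the non-smooth term is the indicator function ι_C, its proximity operator reduces to the projection P_C, so the iteration becomes projected gradient descent.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, n_iter=500):
    """Proximal gradient (ISTA) sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    The smooth term 0.5*||Ax - b||^2 is handled by a gradient step,
    the non-smooth term lam*||x||_1 by its proximity operator.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # proximal step on the non-smooth part
    return x

def project_box(v, lo, hi):
    """Projection onto the box C = [lo, hi]^N, i.e. the prox of its indicator function."""
    return np.clip(v, lo, hi)

if __name__ == "__main__":
    # Illustrative data, not from the article.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
    x_hat = ista(A, b, lam=0.1, step=step)
    print("nonzero entries recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))
```

The step size 1/L, where L is the Lipschitz constant of the gradient of the smooth term, is the standard choice that guarantees convergence of this iteration; any smaller positive step also works in this sketch.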